5 research outputs found

    Real time text localization for Indoor Mobile Robot Navigation

    Scene text is an important feature to extract, especially in vision-based mobile robot navigation, since many potential landmarks such as nameplates and information signs contain text. In this paper, a novel two-step text localization method for indoor mobile robot navigation is introduced. The method is based on morphological operators and machine learning techniques and can be used in real-time environments. In the first step, a new set of morphological operators is applied in a particular sequence to extract high-contrast areas that have a high probability of containing text. Morphological operators offer several advantages: high computation speed, invariance to geometrical transformations such as translation, rotation, and scaling, and the ability to extract all areas containing text. In the second step, a set of nine features is computed for each candidate region to accurately detect and discard regions that do not contain text. These features describe texture properties and can be computed in real time. An SVM classifier then decides whether each region contains text. The performance of the proposed algorithm is compared against a number of widely used text localization algorithms, and the results show that the method can quickly and effectively localize and extract text regions from real scenes and can be used for mobile robot navigation in indoor environments to detect text-based landmarks.
    Comment: 5 pages
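    The abstract does not give the exact operator sequence or the nine texture features, so the following Python sketch only illustrates the general two-step idea: morphological filtering proposes high-contrast candidate boxes, and a texture-feature SVM then accepts or rejects each box. The kernel size, the placeholder features, and the text/non-text labels (1/0) are assumptions, not the paper's choices.

```python
# Two-step text localization sketch (OpenCV 4.x and scikit-learn assumed).
import cv2
import numpy as np
from sklearn.svm import SVC

def candidate_regions(gray):
    """Step 1: morphological filtering to propose high-contrast candidate boxes."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 3))
    # Morphological gradient highlights strong local contrast, typical of text strokes.
    grad = cv2.morphologyEx(gray, cv2.MORPH_GRADIENT, kernel)
    _, bw = cv2.threshold(grad, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    # Closing merges individual strokes into word- or line-level blobs.
    closed = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 50]

def texture_features(gray, box):
    """Placeholder texture descriptors for a candidate box (not the paper's nine features)."""
    x, y, w, h = box
    patch = gray[y:y + h, x:x + w].astype(np.float32)
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1)
    return np.array([patch.mean(), patch.std(),
                     np.abs(gx).mean(), np.abs(gy).mean(), w / float(h)])

def localize_text(gray, clf):
    """Step 2: keep only the candidates that a trained SVM labels as text (label 1)."""
    boxes = candidate_regions(gray)
    if not boxes:
        return []
    feats = np.array([texture_features(gray, b) for b in boxes])
    return [b for b, label in zip(boxes, clf.predict(feats)) if label == 1]

# The classifier must be fit offline on labeled text / non-text candidate regions
# before use, e.g. clf = SVC(kernel="rbf").fit(train_features, train_labels).
```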

    An Efficient Evolutionary Based Method For Image Segmentation

    The goal of this paper is to present a new, efficient image segmentation method based on evolutionary computation, a model inspired by human behavior. Based on this model, a four-layer process for image segmentation is proposed using the split/merge approach. In the first layer, the image is split into numerous regions using the watershed algorithm. In the second layer, a co-evolutionary process is applied to form the centers of the final segments by merging similar primary regions. In the third layer, a meta-heuristic process uses two operators to connect the residual regions to their corresponding centers. In the final layer, an evolutionary algorithm combines the resulting similar, neighboring regions. The layers are completely independent, so for a particular application a specific layer can be changed without changing the other layers. Several properties of the algorithm make it applicable to a wide range of problems: its flexibility, the ability to use different feature vectors for segmentation (grayscale, color, texture, etc.), the ability to control uniformity and the number of final segments through free parameters, and the preservation of small regions. Moreover, the independence of the regions in the second layer and of the centers in the third layer makes a parallel implementation possible, which increases the algorithm's speed. The presented algorithm was tested on a standard image dataset (BSDS 300), and the resulting region boundaries were compared with segmentation contours drawn by different people. The results show the efficiency of the algorithm and its improvement over similar methods: in 70% of the tested images the results are better than those of the ACT algorithm, and in 100% of the tested images they are better than those of the VSP algorithm.
    Comment: 17 pages
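    As a rough illustration of the split/merge structure (not the paper's co-evolutionary and meta-heuristic layers), the sketch below over-segments an image with the watershed transform and then merges similar neighboring regions via a region adjacency graph. The file name, the marker-free watershed, and the mean-color merge threshold are assumptions; a recent scikit-image (graph utilities at skimage.graph) and an RGB input are assumed.

```python
# Split/merge segmentation sketch: watershed over-segmentation followed by a
# simple greedy merge of similar regions. This replaces the paper's evolutionary
# layers with a plain RAG threshold cut, for illustration only.
import numpy as np
from skimage import io, color, filters, segmentation, graph

def split_and_merge(path, merge_thresh=30.0):
    img = io.imread(path)                      # assumed RGB, 8-bit
    gray = color.rgb2gray(img)
    # "Split": watershed on the gradient image yields numerous small regions.
    gradient = filters.sobel(gray)
    labels = segmentation.watershed(gradient)
    # "Merge": combine adjacent regions whose mean colors are close.
    rag = graph.rag_mean_color(img, labels)
    return graph.cut_threshold(labels, rag, merge_thresh)

segments = split_and_merge("scene.jpg")        # hypothetical input image
print("final segment count:", len(np.unique(segments)))
```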

    An improvement on LSB+ method

    Least Significant Bit (LSB) substitution is an old and simple data hiding method that can be implemented almost effortlessly, in the spatial or transform domain, over any digital medium. The method can be attacked by several steganalysis techniques because it detectably changes the statistical and perceptual characteristics of the cover signal. A typical steganalysis method for LSB substitution is the histogram attack, which attempts to detect anomalies in the cover image's histogram. A well-known way to withstand the histogram attack is LSB+ steganography, which intentionally embeds some extra bits to make the histogram look natural. However, the LSB+ method still affects the perceptual and statistical characteristics of the cover signal. In this paper, we propose a new method for image steganography, called LSB++, which improves on LSB+ by reducing the changes made to the perceptual and statistical attributes of the cover image. We identify sensitive pixels that affect the signal characteristics and, by introducing a new embedding key, lock them and exclude them from the extra-bit embedding of the LSB+ method. Evaluation results show that, without reducing the embedding capacity, our method decreases the potentially detectable changes caused by the embedding process.
    Comment: 6 pages, Journal of Iran Secure society (Monadi), 2010, issue 3 (in Persian)
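    The locking mechanism can be pictured with a small sketch: message bits go into the least significant bits of pixels that are not marked as locked, and locked pixels are left untouched. The key-based selection of sensitive pixels and the histogram-compensating extra bits of LSB+ / LSB++ are specified in the paper; here the lock mask is only a random stand-in seeded like a key.

```python
# Plain LSB embedding with a "locked pixel" mask (illustrative stand-in only;
# the paper's criterion for sensitive pixels and its compensation step are not reproduced).
import numpy as np

def embed_lsb(cover, bits, lock_mask):
    """Write message bits into the LSBs of unlocked pixels, in row-major order."""
    stego = cover.copy().ravel()
    free = np.flatnonzero(~lock_mask.ravel())          # indices open for embedding
    if len(bits) > len(free):
        raise ValueError("message longer than unlocked capacity")
    idx = free[:len(bits)]
    stego[idx] = (stego[idx] & 0xFE) | np.asarray(bits, dtype=np.uint8)
    return stego.reshape(cover.shape)

def extract_lsb(stego, n_bits, lock_mask):
    """Read n_bits back from the LSBs of the same unlocked pixels."""
    free = np.flatnonzero(~lock_mask.ravel())
    return (stego.ravel()[free[:n_bits]] & 1).tolist()

# Toy demo on a random 8-bit "image"; a real scheme would derive the mask from a key.
rng = np.random.default_rng(42)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
lock = rng.random(cover.shape) < 0.10                  # ~10% of pixels kept untouched
message = [1, 0, 1, 1, 0, 0, 1, 0]
assert extract_lsb(embed_lsb(cover, message, lock), len(message), lock) == message
```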

    A new adaptive method for hiding data in images

    The LSB method is a well-known steganography technique that hides message bits in the least significant bits of pixel values. It changes the statistical characteristics of the image, which results in an insecure channel. To increase its security against steganalysis methods, this paper proposes an adaptive method for hiding data in images, in which the amount of data and the embedding method differ for each area of the image. Experimental results show that the security of the proposed method is higher than that of the general LSB method, and in some cases the capacity of the carrier signal is increased.
    Comment: 6 pages, in Persian, Proceedings of the 6th Iranian Conference on Machine Vision and Image Processing, Tehran, Iran, 201
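    The abstract does not state the adaptivity rule, so the sketch below uses a common stand-in: textured (high-variance) blocks are allowed to carry more embedded bits than smooth blocks. The block size, the variance thresholds, and the 1/2/3-bit depths are illustrative assumptions, not the paper's parameters.

```python
# Region-adaptive capacity map for LSB hiding: assign more embeddable bits per
# pixel in busy-textured blocks, fewer in smooth ones (illustrative parameters).
import numpy as np

def bits_per_pixel_map(gray, block=8, thresholds=(25.0, 100.0)):
    """Assign 1, 2, or 3 embeddable LSBs per pixel based on local block variance."""
    h, w = gray.shape
    bpp = np.ones((h, w), dtype=np.uint8)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            var = gray[y:y + block, x:x + block].astype(np.float64).var()
            if var > thresholds[1]:
                bpp[y:y + block, x:x + block] = 3   # busy texture: hide more bits
            elif var > thresholds[0]:
                bpp[y:y + block, x:x + block] = 2   # moderate texture
    return bpp

rng = np.random.default_rng(0)
gray = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
print("adaptive capacity (bits):", int(bits_per_pixel_map(gray).sum()))
```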

    A novel recommendation system to match college events and groups to students

    With the recent increase in data online, discovering meaningful opportunities can be time-consuming and complicated for many individuals. To overcome this data overload challenge, we present a novel text-content-based recommender system as a valuable tool to predict user interests. To that end, we develop a specific procedure to create user models and item feature-vectors, where items are described in free text. The user model is generated by soliciting a few keywords from the user and expanding those keywords into a list of weighted near-synonyms. The item feature-vectors are generated from the textual descriptions of the items, using modified tf-idf values of the users' keywords and their near-synonyms. Once the users are modeled and the items are abstracted into feature vectors, the system returns the maximum-similarity items as recommendations to that user. Our experimental evaluation shows that our method of creating the user models and item feature-vectors resulted in higher precision and accuracy in comparison to well-known feature-vector-generating methods like GloVe and Word2Vec. It also shows that stemming and the use of a modified version of tf-idf increase the accuracy and precision by 2% and 3%, respectively, compared to non-stemming and the standard tf-idf definition. Moreover, the evaluation results show that updating the user model from usage histories improves the precision and accuracy of the system. This recommender system has been developed as part of the Agnes application, which runs on iOS and Android platforms and is accessible through the Agnes website.
    Comment: 10 pages, AIAAT 2017, Hawaii, US
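    A minimal sketch of the matching pipeline described above: item descriptions become tf-idf vectors, the user's keywords (with hand-written weights standing in for the weighted near-synonym expansion) become a query vector over the same vocabulary, and items are ranked by cosine similarity. The example events, terms, and weights are hypothetical, and the paper's stemming and modified tf-idf are omitted.

```python
# Keyword-profile vs. tf-idf item matching sketch (scikit-learn assumed).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

events = [
    "Intro to machine learning workshop with hands-on Python coding",
    "Campus hiking club weekend trip and outdoor photography",
    "Guest lecture on data science careers and resume review",
]

# Hypothetical user model: keywords and near-synonyms with weights.
user_terms = {"machine": 1.0, "learning": 1.0, "data": 0.7, "coding": 0.5}

vectorizer = TfidfVectorizer()
item_matrix = vectorizer.fit_transform(events)       # item feature-vectors

# Build the user vector in the same vocabulary space.
query = [0.0] * len(vectorizer.vocabulary_)
for term, weight in user_terms.items():
    idx = vectorizer.vocabulary_.get(term)
    if idx is not None:
        query[idx] = weight

scores = cosine_similarity([query], item_matrix)[0]
for score, text in sorted(zip(scores, events), reverse=True):
    print(f"{score:.3f}  {text}")
```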